Timeline Of Information Theory
A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error-correcting codes and related subjects.

* 1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula Σ pᵢ log pᵢ for the entropy of a single gas particle
* 1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the ''whole'' system
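Up to sign and a constant factor, Boltzmann's Σ pᵢ log pᵢ is the same functional Shannon later adopted for information. A minimal illustrative sketch in Python (the function name and the uniform example are mine):

```python
import math

def gibbs_entropy(probs, k=1.0):
    """Entropy -k * sum(p_i * ln p_i) of a discrete distribution.

    With k = 1 and base-2 logs this is Shannon's H; with k = Boltzmann's
    constant it is the Gibbs entropy of statistical mechanics.
    """
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over 4 states maximizes the entropy: ln 4 ≈ 1.386
print(gibbs_entropy([0.25, 0.25, 0.25, 0.25]))
```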
* 1924 – Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system
* 1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics
* 1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning)
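Hartley's measure is the special case of Shannon entropy in which all messages are equally likely. A small illustration (function name is mine):

```python
import math

def hartley_information(num_messages, base=2):
    """Hartley's measure: the log of the number of equally likely messages."""
    return math.log(num_messages, base)

# Distinguishing one of 26 letters conveys log2(26) ≈ 4.70 bits
print(hartley_information(26))
```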
* 1929 – Leó Szilárd analyses Maxwell's demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work
* 1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process
* 1944 – Claude Shannon's theory of information is substantially complete
* 1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950)
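A (7,4) Hamming code appends three parity bits to four data bits, so any single bit-flip produces a nonzero syndrome that identifies the corrupted position. A sketch using one standard choice of matrices (conventions vary; Hamming's own bit ordering differs):

```python
import numpy as np

# One standard generator / parity-check pair for the (7,4) Hamming code
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2              # 4 data bits -> 7-bit codeword
codeword[2] ^= 1                     # simulate a single channel error
syndrome = H @ codeword % 2          # nonzero syndrome <=> error detected
if syndrome.any():
    err = int(np.argmax((H.T == syndrome).all(axis=1)))  # matching column
    codeword[err] ^= 1               # flip the located bit back
print(syndrome, codeword[:4])        # data bits recovered: [1 0 1 1]
```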
* 1948 – Claude E. Shannon publishes ''A Mathematical Theory of Communication''
* 1949 – Claude E. Shannon publishes ''Communication in the Presence of Noise'' – Nyquist–Shannon sampling theorem and Shannon–Hartley law
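The Shannon–Hartley law bounds the error-free rate of a band-limited Gaussian channel as C = B log₂(1 + S/N). A quick computation (values illustrative):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon–Hartley law: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3.1 kHz telephone channel at 30 dB SNR (S/N = 1000) tops out near
# 30.9 kbit/s -- which is why analogue modems stalled in the mid-30s.
print(channel_capacity(3100, 1000))
```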
* 1949 – Claude E. Shannon's ''Communication Theory of Secrecy Systems'' is declassified
* 1949 – Robert M. Fano publishes ''Transmission of Information'' (M.I.T. Press, Cambridge, Massachusetts) – Shannon–Fano coding
* 1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes
* 1949 – Marcel J. E. Golay introduces Golay codes for forward error correction
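Kraft's inequality states that a binary prefix code with codeword lengths l₁, …, lₙ exists if and only if Σ 2^(−lᵢ) ≤ 1. A quick check (sketch):

```python
def kraft_sum(lengths, r=2):
    """A prefix code over an r-symbol alphabet with these codeword
    lengths exists iff sum(r**-l) <= 1 (Kraft's inequality)."""
    return sum(r ** -l for l in lengths)

print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> achievable, e.g. {0, 10, 110, 111}
print(kraft_sum([1, 1, 2]))     # 1.25 -> no prefix code has these lengths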
* 1951 – Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence
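The divergence D(P ∥ Q) = Σ pᵢ log(pᵢ/qᵢ) measures how far a distribution P is from a reference Q; it is non-negative and, notably, not symmetric. A direct rendering (sketch):

```python
import math

def kl_divergence(p, q):
    """D(P || Q) = sum p_i * log2(p_i / q_i); zero iff P = Q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [1/3, 1/3, 1/3]
print(kl_divergence(p, q), kl_divergence(q, p))  # note the asymmetry
```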
* 1951 – David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression
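Huffman's bottom-up construction repeatedly merges the two least probable subtrees. A compact heap-based sketch (not Huffman's original presentation; the symbol weights are illustrative):

```python
import heapq

def huffman_code(freqs):
    """Build an optimal prefix code from a {symbol: weight} map."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tiebreaker so dicts never compare
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```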
* 1953 – August Albert Sardinas and George W. Patterson devise the Sardinas–Patterson algorithm, a procedure to decide whether a given variable-length code is uniquely decodable
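The test iterates sets of "dangling suffixes"; in one standard formulation the code is uniquely decodable iff no such set ever contains a codeword. A sketch:

```python
def uniquely_decodable(code):
    """Sardinas–Patterson test for a finite set of codeword strings."""
    code = set(code)

    def dangling(prefixes, words):
        # nonempty suffixes w with p + w in words for some prefix p
        return {w[len(p):] for p in prefixes for w in words
                if w != p and w.startswith(p)}

    s = dangling(code, code)              # S_1
    seen = set()
    while s:
        if s & code:                      # a codeword is a dangling
            return False                  # suffix: some string has 2 parses
        if frozenset(s) in seen:          # cycle: no conflict can appear
            return True
        seen.add(frozenset(s))
        s = dangling(code, s) | dangling(s, code)
    return True

print(uniquely_decodable({"0", "01", "10"}))   # False: "010" parses two ways
print(uniquely_decodable({"0", "10", "110"}))  # True: a prefix code
```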
* 1954 – Irving S. Reed and David E. Muller propose Reed–Muller codes
* 1955 – Peter Elias introduces convolutional codes
* 1957 – Eugene Prange first discusses cyclic codes
* 1959 – Alexis Hocquenghem, and independently the next year Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri, discover BCH codes
* 1960 – Irving S. Reed and Gustave Solomon propose Reed–Solomon codes
* 1962 – Robert G. Gallager proposes low-density parity-check codes; they are unused for 30 years due to technical limitations
* 1965 – Dave Forney discusses concatenated codes
* 1966 – Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) develop linear predictive coding (LPC), a form of speech coding
* 1967 – Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable
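The algorithm is dynamic programming over a trellis: at each step it keeps, per state, only the best-scoring path into that state. A generic sketch on a toy hidden-state model (the same recursion runs over a convolutional code's state trellis; the weather example is invented):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path (log-domain DP over the trellis)."""
    # best[s] = (log-probability, path) of the best path ending in state s
    best = {s: (math.log(start_p[s] * emit_p[s][obs[0]]), [s]) for s in states}
    for o in obs[1:]:
        nxt = {}
        for s in states:
            lp, path = max(
                ((plp + math.log(trans_p[prev][s] * emit_p[s][o]), ppath)
                 for prev, (plp, ppath) in best.items()),
                key=lambda t: t[0])
            nxt[s] = (lp, path + [s])
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]

states = ("rain", "sun")
start = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "none": 0.1},
        "sun": {"umbrella": 0.2, "none": 0.8}}
print(viterbi(["umbrella", "umbrella", "none"], states, start, trans, emit))
```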
* 1968 – Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year
* 1968 – Chris Wallace and David M. Boulton publish the first of many papers on minimum message length (MML) statistical and inductive inference
* 1970 – Valerii Denisovich Goppa introduces Goppa codes
* 1972 – Jørn Justesen proposes Justesen codes, an improvement of Reed–Solomon codes
* 1972 – Nasir Ahmed proposes the discrete cosine transform (DCT), which he develops with T. Natarajan and K. R. Rao in 1973; the DCT later became the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3
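The DCT-II concentrates a smooth signal's energy into a few low-frequency coefficients, which is what lets lossy codecs quantize the remainder coarsely. A naive O(n²) sketch (production codecs use fast factorizations):

```python
import numpy as np

def dct_ii(x):
    """Naive DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.cos(np.pi / len(x) * (n + 0.5) * k))
                     for k in range(len(x))])

# Energy compaction on a smooth signal: almost all of it lands in the
# first few coefficients.
x = np.cos(np.linspace(0, np.pi, 8))
print(np.round(dct_ii(x), 3))
```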
* 1973 – David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding
* 1976 – Gottfried Ungerboeck gives the first paper on trellis modulation; a more detailed exposition in 1982 leads to a raising of analogue modem POTS speeds from 9.6 kbit/s to 33.6 kbit/s
* 1976 – Richard Pasco and Jorma J. Rissanen develop effective arithmetic coding techniques
* 1977 – Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77)
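LZ77 replaces repeated substrings with back-references into a sliding window. A greedy toy compressor emitting (offset, length, next character) triples (illustrative only, not the original formulation):

```python
def lz77_compress(data, window=255):
    """Greedy LZ77: emit (offset, length, next_char) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):   # scan the window
            l = 0
            # matches may overlap the current position (self-referential copy)
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

print(lz77_compress("abracadabra"))
```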
* 1989 – Phil Katz publishes the .zip format including DEFLATE (LZ77 + Huffman coding); later to become the most widely used archive container
* 1993 – Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce turbo codes
* 1994 – Michael Burrows and David Wheeler publish the Burrows–Wheeler transform, later to find use in bzip2
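The transform sorts all rotations of the input and keeps the last column, grouping like characters so that later compression stages do better; with an end-of-string marker it is invertible. A naive sketch:

```python
def bwt(s, eos="\0"):
    """Naive Burrows–Wheeler transform: last column of sorted rotations."""
    s += eos                                     # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

print(repr(bwt("banana")))  # 'annb\x00aa' -- like characters run together
```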
* 1995 – Benjamin Schumacher coins the term qubit and proves the quantum noiseless coding theorem
* 2003 – David J. C. MacKay shows the connection between information theory, inference and machine learning in his book ''Information Theory, Inference, and Learning Algorithms''
* 2006 – first asymmetric numeral systems (ANS) entropy coding; since 2014 a popular replacement for Huffman and arithmetic coding in compressors such as Facebook's Zstandard, Apple's LZFSE, CRAM and JPEG XL
* 2008 – Erdal Arıkan introduces polar codes, the first practical construction of codes that achieves capacity for a wide array of channels

